As the world of AI technology changes rapidly, this set of guiding principles can help us navigate new and exciting opportunities. We believe AI can be a powerful tool that will shape marketing and communications, and we affirm that clear guidelines and responsible use can protect and bolster the communications industry.
Background and Context
Artificial intelligence refers to software or machines that exhibit abilities normally associated with human intelligence, such as understanding natural language, recognizing patterns, making decisions, and solving complex problems. However, artificial intelligence tools cannot mirror the complexities of human reasoning, especially when it comes to moral and ethical considerations. Generative AI tools make predictions based on vast amounts of existing content rather than creating original work, and they cannot replace human creativity. These tools can help enhance productivity, but they will not replace the role of a human-centered approach at the University of Utah.
These guidelines are for communications and marketing professionals who work for the University of Utah. They are not intended to govern other areas of the University such as education and classroom settings, IT, chatbots, etc. They apply to AI tools used to generate content, such as images, text, music, video, and other similar items.
Guiding Principles
- We believe in a human-centered approach to AI that empowers and augments professionals. AI technologies should be assistive, not autonomous.
- We believe that humans remain accountable for all decisions and actions, even when assisted by AI. All AI-generated material must be carefully reviewed, edited, approved, and overseen by a human author.
- We believe in the critical role of human knowledge, experience, emotion, and imagination in creativity, and we seek to explore and promote emerging career paths and opportunities for creative professionals.
- We believe in the power of communication to educate, influence, and effect change. We commit to never knowingly using generative AI technology to deceive or spread misinformation.
- We commit to verifying the accuracy of information supplied by AI. Nothing can replace the role of human fact checkers, and we take responsibility for any AI-assisted information used in communications materials.
- We recognize that AI-generated materials have a high probability of reproducing the copyrighted material of another person. Therefore, we will take great care to ensure that the final product of any AI-generated material has been carefully reviewed and, where necessary, modified to avoid plagiarism.
- We believe that transparency in AI usage is essential to maintaining the trust of our audiences and stakeholders.
- We believe in the importance of upskilling and reskilling professionals, and in using AI to increase productivity and efficiency and to build more fulfilling careers and lives.
- We believe in partnering with organizations and people who share our principles.
Examples of Acceptable Use of AI
The generative AI marketplace is dynamic, and it would be impossible to list every allowed and prohibited use case. The examples below, along with the preceding guidelines, are intended to provide direction. This is a living document and may change as technology, legal review, and other policies change.
We strongly encourage you to familiarize yourself with available generative AI tools. Practice using them to see how they can help enhance your productivity.
For the purposes of this document, content is meant in its broadest sense and refers to articles, press releases, feature stories, websites, web content, podcasts, videos, etc.
Brainstorming new story ideas: AI can help with fresh story ideas, and it can offer a different perspective or provide constructive feedback on existing concepts for content.
Creating an outline: AI can organize content ideas into a cohesive structure.
Editorial calendar/content plan: AI can help you quickly organize and plan your content and social media calendars.
Helping with headers, headlines, and other content structure and navigation: AI tools can help you identify common themes and provide draft ideas for headlines, subheads, website headers, H3 tags, etc.
Search engine optimization (SEO): AI tools in the marketing and communications realm can quickly assist with keyword research and help analyze factors like readability, keyword usage, and relevancy to improve webpage quality and performance, among other uses.
Helping draft social media posts: AI tools can be a great place to start for a quick first draft of social media posts. They can also help you tailor existing social media posts, comments, etc. to different audiences and drive engagement.
Getting started with research: Ask AI tools to quickly teach you about a concept or topic. From beach volleyball rules to scientific concepts, they can be outstanding research assistants. However, as stated previously, humans must verify all facts, research, knowledge, and information. Keep in mind that AI tools can “hallucinate” and fabricate information.
Personalizing messaging: AI tools can be adept at helping you rework your content to reach different audiences, such as students, staff, faculty, donors, or the media. They can make suggestions for how to change language, shorten text, emphasize different targeted messages, etc.
Anticipating potential questions or objections: Ask an AI tool to behave like an investigative journalist and suggest potential questions or objections from stakeholders so you can prepare responses in advance.
Assisting as an editor: AI tools can answer questions about Chicago style, AP style, etc. However, keep in mind that some tools may not have access to the most recent version of regularly updated style guides and do not have access to University of Utah editorial guidelines.
Serving as a thesaurus: AI tools can help you replace a word or phrase or rework a section of content.
Enhancing productivity: Provided privacy policies are followed, AI tools can help with routine tasks such as summarizing interview transcripts, analyzing data, drafting outlines and text for presentations, etc. AI may be able to help draft emails, but it is vital not to rely on AI alone. See the example of Vanderbilt using AI to write an email after a mass shooting as a cautionary tale of what not to do.
Tightening a piece: Paste content that’s too long into an AI tool and ask it to identify areas you could cut. It will look for repetition or places where shorter phrasing would suffice. Note: these suggestions should still be reviewed by humans, especially since the tool may suggest changes to quotes or alter factual information.
Improving an image: Use of content-aware fill functionalities in photo editing software, such as Photoshop, Canva, etc., is permitted within these guidelines for images you already own. Content-aware fill is considered an assistive tool that can enhance productivity and improve the visual quality of marketing and communications materials by seamlessly retouching or removing unwanted elements from images. However, we emphasize that the primary purpose of photo editing tools in this context is to assist and augment human creativity rather than replace it. Additionally, it is crucial to ensure that the use of these tools does not fundamentally alter the context or integrity of the image, maintaining the image's authenticity and intended message.
AI-powered photo filters and editing are permissible for enhancements such as improving headshots, but they should be used sparingly; a natural, authentic look is often preferable, and less editing is typically more effective.
To reiterate, AI tools are assistive, not autonomous. They are writing and content aids and cannot replace the role and importance of the human in these tasks. Additionally, these are examples of acceptable and prohibited use, not an exhaustive list.
Prohibited Use of AI
AI tools should not be used in any way that would violate existing university standards or policies. Examples include creating false communications, spamming or phishing, and manipulating data to create a deceitful impression.
AI tools are not encrypted or private. Do not enter proprietary data or information about students, employees, patients, or other constituents if doing so could breach state or federal privacy laws, including HIPAA and FERPA, or university policies. Information submitted to many AI tools has the potential to become public and part of the promptable knowledge base.
AI tools should not be used to create entire pieces of written content. They can be used for tasks such as brainstorming, drafting headlines, and targeting messaging, but fully AI-generated content is prohibited at this time.
Using AI tools for fact-checking without human verification is prohibited. AI tools are outstanding research assistants but may “hallucinate” and suggest facts and sources that sound plausible yet are entirely inaccurate. Again, humans must be central to all research, content creation, and review.
Additionally, AI-generated images, music, audio, and video should not be used in university communications materials. The legality of this practice is under review in the courts, and the ethics are dubious. Instead, AI can be used to help brainstorm art ideas and direction.
Some artists are pursuing legal recourse against organizations using AI-generated art rather than against the AI companies themselves. Real-world example: A large tech company recently shared an AI-generated image on its channels. An artist recognized that his work was prominently used in the piece and threatened a lawsuit unless the tech company compensated him. Companies that use AI-generated art are being advised to include indemnification clauses in contracts.
This policy will be updated as new laws are enacted and legal reviews are completed.